Building Efficient Software to Support Content Delivery Services
Many content delivery services use key components such as web servers, databases, and key-value stores to serve content over the Internet. These services, and their component systems, face unique modern challenges. Services now operate at massive scale, serving large files to wide user bases. Additionally, resource contention is more prevalent than ever due to large file sizes, cloud-hosted and collocated services, and the use of resource-intensive features like content encryption. Existing systems have difficulty adapting to these challenges while still performing efficiently. For instance, streaming video web servers work well with small data, but struggle to service large, concurrent requests from disk. Our goal is to demonstrate how software can be augmented or replaced to help improve the performance and efficiency of select components of content delivery services.
We first introduce Libception, a system designed to help improve disk throughput for web servers that process numerous concurrent disk requests for large content. By using serialization and aggressive prefetching, Libception improves the throughput of the Apache and nginx web servers by a factor of 2 on FreeBSD and 2.5 on Linux when serving HTTP streaming video content. Notably, this improvement is achieved without changing the source code of either web server. We additionally show that Libception's benefits translate into performance gains for other workloads, reducing the runtime of a microbenchmark using the diff utility by 50% (again without modifying the application's source code).
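The serialization-plus-prefetching idea can be sketched as follows. This is a toy illustration of the technique the abstract names, not Libception's actual implementation; all class names, parameters, and the in-memory cache are hypothetical:

```python
import threading

class SerializedPrefetcher:
    """Toy sketch of two ideas attributed to Libception: serialize disk
    reads so the disk sees one sequential stream at a time, and prefetch
    aggressively ahead of each request's offset. Names are illustrative."""

    def __init__(self, path, chunk=1 << 20, window=4):
        self.path = path
        self.chunk = chunk            # read unit (default 1 MiB)
        self.window = window          # prefetch this many chunks ahead
        self.cache = {}               # offset -> bytes
        self.lock = threading.Lock()  # serializes all disk access

    def read(self, offset):
        with self.lock:               # one disk request at a time
            if offset not in self.cache:
                with open(self.path, "rb") as f:
                    # aggressive prefetch: pull the whole window now,
                    # so later reads hit the cache instead of the disk
                    for i in range(self.window):
                        off = offset + i * self.chunk
                        if off not in self.cache:
                            f.seek(off)
                            self.cache[off] = f.read(self.chunk)
            return self.cache[offset]
```

Serializing turns many interleaved large reads into sequential disk access, which is what makes prefetching pay off; a real system would also bound the cache and issue the prefetch asynchronously.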
We next implement Nessie, a distributed, RDMA-based, in-memory key-value store. Nessie decouples data from indexing metadata, and its protocol only consumes CPU on servers that initiate operations. This design makes Nessie resilient against CPU interference, allows it to perform well with large data values, and conserves energy during periods of non-peak load. We find that Nessie doubles throughput versus other approaches when CPU contention is introduced, and has 70% higher throughput when managing large data in write-oriented workloads. It also provides 41% power savings (over idle power consumption) versus other approaches when system load is at 20% of peak throughput.
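The key property described above — only the initiating client spends CPU — can be sketched by modeling one-sided RDMA READs as passive memory lookups. This is a simplified sketch under assumed names and layout, not Nessie's real protocol:

```python
class ClientDrivenKV:
    """Sketch of a client-driven, Nessie-style read path: the index and
    the data table live in passive (RDMA-registered) server memory, and
    the initiating client does all the work with one-sided reads.
    Versions let the client detect a racing writer. Names hypothetical."""

    def __init__(self, index_mem, data_mem):
        # Stand-ins for remote memory regions; reading them models a
        # one-sided RDMA READ that consumes no server-side CPU.
        self.index_mem = index_mem    # key -> (data_slot, version)
        self.data_mem = data_mem      # data_slot -> (version, value)

    def get(self, key):
        slot, ver = self.index_mem[key]        # one-sided read 1: index entry
        data_ver, value = self.data_mem[slot]  # one-sided read 2: data item
        if data_ver != ver:                    # writer raced us; retry
            return self.get(key)
        return value
```

Because indexing metadata is decoupled from data, a write can install a new data item in a fresh slot and then swing the index pointer, which is what makes the optimistic version check above sufficient for readers.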
Finally, we develop RocketStreams, a framework which facilitates the dissemination of live streaming video. RocketStreams exposes an easy-to-use API to applications, obviating the need for services to manually implement complicated data management and networking code. RocketStreams' TCP-based dissemination compares favourably to an alternative solution, reducing CPU utilization on delivery nodes by 54% and increasing viewer throughput by 27% versus the Redis data store. Additionally, when RDMA-enabled hardware is available, RocketStreams provides RDMA-based dissemination which further increases overall performance, decreasing CPU utilization by 95% and increasing concurrent viewer throughput by 55% versus Redis.
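The kind of API the abstract describes — publish a live segment once, let the framework fan it out to viewers — might look roughly like this. All names are hypothetical; this is not RocketStreams' actual interface, and a real implementation would push segments over TCP or RDMA rather than in-process queues:

```python
import queue

class LiveChannel:
    """Toy fan-out channel standing in for a dissemination framework:
    the service publishes each encoded video segment once, and every
    attached viewer session receives its own copy."""

    def __init__(self):
        self.viewers = []

    def subscribe(self):
        """Attach a viewer; returns a queue of incoming segments."""
        q = queue.Queue()
        self.viewers.append(q)
        return q

    def publish(self, segment):
        """Fan the segment out to every attached viewer."""
        for q in self.viewers:
            q.put(segment)
```

Hiding the fan-out behind two calls (`subscribe`, `publish`) is what lets applications avoid writing their own data-management and networking code, as the abstract claims.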
MicroFuge: A Middleware Approach to Providing Performance Isolation in Cloud Storage Systems
Most cloud providers improve resource utilization by having multiple tenants share the same resources. However, this comes at the cost of reduced isolation between tenants, which can lead to inconsistent and unpredictable performance. This performance variability is a significant impediment for tenants running services with strict latency deadlines. Providing predictable performance is particularly important for cloud storage systems. The storage system is the performance bottleneck for many cloud-based services and therefore often determines their overall performance characteristics. In this paper, we introduce MicroFuge, a new distributed caching and scheduling middleware that provides performance isolation for cloud storage systems. MicroFuge addresses the performance isolation problem by building an empirically driven performance model of the underlying storage system based on measured data. Using this model, MicroFuge reduces deadline misses through adaptive deadline-aware cache eviction, scheduling and load-balancing policies. MicroFuge can also perform early rejection of requests that are unlikely to make their deadlines. Using workloads from the YCSB benchmark on an EC2 deployment, we show that adding MicroFuge to the storage stack substantially reduces the deadline miss rate of a distributed storage system compared to using a deadline-oblivious distributed caching middleware such as Memcached.
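Two of the ideas above — an empirical latency model feeding early rejection, and deadline-aware eviction — can be sketched as follows. The policy details here are hypothetical simplifications, not MicroFuge's actual algorithms:

```python
import bisect

class DeadlineAwareCache:
    """Sketch of MicroFuge-style mechanisms: keep an empirical latency
    histogram of the backing store, reject requests whose deadline is
    unlikely to survive a cache miss, and evict the entry whose
    consumers have the slackest deadline. Details are illustrative."""

    def __init__(self, capacity, reject_quantile=0.95):
        self.capacity = capacity
        self.reject_quantile = reject_quantile
        self.latencies = []   # sorted measured store latencies (ms)
        self.cache = {}       # key -> (value, tightest_deadline_ms)

    def observe_latency(self, ms):
        """Feed the empirical performance model with a measurement."""
        bisect.insort(self.latencies, ms)

    def predicted_latency(self):
        """Tail-latency estimate at the configured quantile."""
        if not self.latencies:
            return 0.0
        i = min(len(self.latencies) - 1,
                int(self.reject_quantile * len(self.latencies)))
        return self.latencies[i]

    def admit(self, key, deadline_ms):
        """Early rejection: False if a miss would likely blow the deadline."""
        if key in self.cache:
            return True
        return self.predicted_latency() <= deadline_ms

    def put(self, key, value, deadline_ms):
        """Deadline-aware eviction: drop the slackest-deadline entry."""
        if len(self.cache) >= self.capacity and key not in self.cache:
            victim = max(self.cache, key=lambda k: self.cache[k][1])
            del self.cache[victim]
        self.cache[key] = (value, deadline_ms)
```

Rejecting a doomed request early frees capacity for requests that can still meet their deadlines, which is how an admission test like `admit` lowers the overall miss rate.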
The Meta-Position of Phe4 in Leu-Enkephalin Regulates Potency, Selectivity, Functional Activity, and Signaling Bias at the Delta and Mu Opioid Receptors
This work is licensed under a Creative Commons Attribution 4.0 International License. As tool compounds to study cardiac ischemia, the endogenous δ-opioid receptor (δOR) agonist Leu5-enkephalin and the more metabolically stable synthetic peptide (d-Ala2, d-Leu5)-enkephalin are frequently employed. However, both peptides have similar pharmacological profiles that restrict detailed investigation of the cellular mechanism of the δOR's protective role during ischemic events. Thus, a need remains for δOR peptides with improved selectivity and unique signaling properties for investigating the specific roles for δOR signaling in cardiac ischemia. To this end, we explored substitution at the Phe4 position of Leu5-enkephalin for its ability to modulate receptor function and selectivity. Peptides were assessed for their affinity to bind to δORs and µ-opioid receptors (µORs) and potency to inhibit cAMP signaling and to recruit β-arrestin 2. Additionally, peptide stability was measured in rat plasma. Substitution of the meta-position of Phe4 of Leu5-enkephalin provided high-affinity ligands with varying levels of selectivity and bias at both the δOR and µOR and improved peptide stability, while substitution with picoline derivatives produced lower-affinity ligands with G protein biases at both receptors. Overall, these favorable substitutions at the meta-position of Phe4 may be combined with other modifications to Leu5-enkephalin to deliver improved agonists with finely tuned potency, selectivity, bias, and drug-like properties.
An Overview of the Management of Flexor Tendon Injuries
Flexor tendon injuries remain a challenging condition to manage while ensuring an optimal outcome for the patient. Since the first flexor tendon repair was described by Kirchmayr in 1917, several approaches to flexor tendon injury have enabled successful repair rates of 70-90%. Primary surgical repair results in better functional outcomes than secondary repair or tendon graft surgery. Flexor tendon repair has been extensively researched, and the literature demonstrates that successful repair requires minimal gapping at the repair site, no interference with tendon vascularity, secure suture knots, a smooth junction of the tendon ends, and sufficient strength for healing. However, the exact surgical approach currently used among surgeons to achieve success is still controversial. This review therefore discusses the results of studies representing current knowledge of the optimal approach to flexor tendon repair. Post-operative rehabilitation after flexor tendon surgery is another area that has caused extensive debate in hand surgery. The trend towards more active mobilisation protocols seems to be favoured, but further study is needed to identify the protocol that achieves function and gliding while avoiding tendon rupture. Lastly, despite successful surgery, complications still commonly occur post-operatively, including adhesion formation, tendon rupture and joint stiffness; this review therefore also discusses the appropriate management of these post-surgical difficulties. New techniques in the management of flexor tendon injuries will also be discussed, including external laser devices and the addition of growth factors and cytokines.
Reflexivity, the picturing of selves, the forging of methods
This paper addresses alternative models for a reflexive methodology and examines the ways in which doctoral students have appropriated these texts in their theses. It then considers the indeterminate qualities of those appropriations. The paper offers a new account of reflexivity as 'picturing', drawing analogies from the interpretation of two very different pictures, by Velázquez and Tshibumba. It concludes with a more open and fluid account of reflexivity, offering the notion of 'signature', and drawing on the work of Gell and also Deleuze and Guattari in relation to the inherently specific nature of 'concepts' situated in space and time.
Predictors of Failed Conscious Sedation in Patients Undergoing an Outpatient Colonoscopy and Implications for the Adenoma Detection Rate
Guidelines to triage patients to conscious sedation (CS) or monitored anaesthesia care (MAC) for colonoscopy do not exist. We aimed to identify the CS failure rate, predictors of failure, and its impact on the adenoma detection rate (ADR). Strict (based on patient experience) and expanded (based on doses of sedative medications) definitions of CS failure were used. Patient and procedure-related variables were extracted. Multivariable logistic regression identified predictors for CS failure and the ADR. Among 766 patients, 29 (3.8%) and 175 (22.8%) patients failed CS by strict and expanded definitions, respectively. Female gender (OR 3.50; 95% CI: 1.37-8.94) and fellow involvement (OR 4.15; 95% CI: 1.79-9.58) were associated with failed CS by the strict definition. Younger age (OR 1.27; 95% CI: 1.07-1.49), outpatient opiate use (OR 1.71; 95% CI: 1.03-2.84), use of an adjunct medication (OR 3.34; 95% CI: 1.94-5.73), and fellow involvement (OR 2.20; 95% CI: 1.31-3.71) were associated with failed CS by the expanded definition. Patients meeting strict failure criteria had a lower ADR (OR 0.30; 95% CI: 0.12-0.77). Several clinical factors may be useful for triaging to MAC. The ADR is lower in patients meeting strict criteria for failed CS.
NASA Runway Incursion Prevention System (RIPS) Dallas-Fort Worth Demonstration Performance Analysis
NASA's Aviation Safety Program Synthetic Vision System project conducted a Runway Incursion Prevention System (RIPS) flight test at the Dallas-Fort Worth International Airport in October 2000. The RIPS research system includes advanced displays, airport surveillance system, data links, positioning system, and alerting algorithms to provide pilots with enhanced situational awareness, supplemental guidance cues, a real-time display of traffic information, and warnings of runway incursions. This report describes the aircraft and ground based runway incursion alerting systems and traffic positioning systems (Automatic Dependent Surveillance - Broadcast (ADS-B) and Traffic Information Service - Broadcast (TIS-B)). A performance analysis of these systems is also presented
NASA/CR-2002-211677 NASA Runway Incursion Prevention System (RIPS) Dallas-Fort Worth Demonstration Performance Analysis
This report describes the test results of the runway incursion alerting systems recorded during the NASA Runway Incursion Prevention System (RIPS) testing at Dallas-Fort Worth International Airport (DFW) in October 2000. Both aircraft-based and ground-based runway incursion alerting were implemented and tested. The Runway Safety Monitor (RSM) and Runway Incursion Advisory and Alerting System (RIAAS) are aircraft-based runway incursion alerting systems. RSM was developed in-house by NASA. RIAAS was developed by Rannoch Corporation. The John A. Volpe National Transportation Systems Center (VTNSC) implemented the Ground-Based System (GBS). Prototype versions of RIAAS and RSM were installed on NASA's B757 aircraft (also called the Airborne Research Integrated Experiments System, or ARIES).